The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
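To make two of the most frequently reported practices concrete, the hedged sketch below combines patch-based training data preparation with K-fold cross-validation on the training set. It is an illustration only, not taken from any challenge solution; the helper name extract_patches and the placeholder data are hypothetical.

```python
# Illustrative sketch only: patch-based training data preparation plus K-fold
# cross-validation, two practices frequently reported in the survey.
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(image, patch_size=128, stride=128):
    """Cut a large 2D image into square patches so it can be processed piecewise."""
    h, w = image.shape[:2]
    patches = [image[y:y + patch_size, x:x + patch_size]
               for y in range(0, h - patch_size + 1, stride)
               for x in range(0, w - patch_size + 1, stride)]
    return np.stack(patches)

images = [np.random.rand(512, 512) for _ in range(20)]  # placeholder training cases

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(images)):
    train_patches = np.concatenate([extract_patches(images[i]) for i in train_idx])
    # A model would be trained per fold on train_patches and validated on the
    # held-out cases; the per-fold models can later be ensembled at test time.
    print(f"fold {fold}: {train_patches.shape[0]} patches, {len(val_idx)} validation cases")
```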
High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance on this task by fusing different sensor modalities, such as LiDAR and camera. However, current works are based on raw data or network feature-level fusion and only consider short-range HD map generation, limiting their deployment in realistic autonomous driving applications. In this paper, we focus on the task of building HD maps in the short range, i.e., within 30 m, and also predicting long-range HD maps up to 90 m, which is required by downstream path planning and control tasks to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, exploiting the fusion of LiDAR and camera data at multiple levels. We benchmark SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms the state-of-the-art baseline methods by large margins. Furthermore, we propose a new metric to evaluate long-range HD map prediction and apply the generated HD maps to a downstream path planning task. The results show that by using the long-range HD maps predicted by our method, we can make better path planning for autonomous vehicles. The code will be available at https://github.com/haomo-ai/SuperFusion.
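As a hedged illustration of feature-level LiDAR-camera fusion in general (not the SuperFusion architecture itself), the sketch below concatenates camera features already projected to the LiDAR bird's-eye-view grid with LiDAR BEV features and mixes them with a small convolutional block; the module name, channel sizes, and upstream projection are assumptions.

```python
# Not the paper's method: a generic feature-level fusion block on a shared BEV grid.
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    def __init__(self, lidar_ch=64, cam_ch=64, out_ch=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(lidar_ch + cam_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_bev, cam_bev):
        # lidar_bev, cam_bev: (B, C, H, W) feature maps on the same BEV grid;
        # the camera-to-BEV projection is assumed to have happened upstream.
        return self.fuse(torch.cat([lidar_bev, cam_bev], dim=1))

fusion = SimpleBEVFusion()
out = fusion(torch.randn(1, 64, 200, 100), torch.randn(1, 64, 200, 100))
print(out.shape)  # torch.Size([1, 128, 200, 100])
```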
Accurate moving object segmentation is an essential task for autonomous driving. It provides effective information for many downstream tasks, such as collision avoidance, path planning, and static map construction. How to effectively exploit spatio-temporal information is a key question for 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network that exploits both spatio-temporal information and different representations of LiDAR scans to improve LiDAR-MOS performance. Specifically, we first use a range-image-based dual-branch structure to separately process the spatial and temporal information obtainable from sequential LiDAR scans, and then combine them using a motion-guided attention module. We also use a point refinement module based on 3D sparse convolution to fuse the information from both the range-image and point-cloud representations of the LiDAR data and to reduce artifacts on object boundaries. We verify the effectiveness of our proposed approach on the LiDAR-MOS benchmark of SemanticKITTI. Our method significantly outperforms the state-of-the-art methods in terms of LiDAR-MOS IoU. Benefiting from the devised coarse-to-fine architecture, our method runs online at sensor frame rate. The implementation of our method is available as open source at: https://github.com/haomo-ai/MotionSeg3D.
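The sketch below illustrates the general idea of motion-guided attention in the spirit of the dual-branch design described above; it is not the module from the paper, and the gating layout, channel sizes, and class name are assumptions.

```python
# Illustrative only: temporal (motion) features gate the spatial (appearance) features.
import torch
import torch.nn as nn

class MotionGuidedAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial_feat, temporal_feat):
        # spatial_feat: features of the current range image
        # temporal_feat: features computed from residual images of consecutive scans
        attn = self.gate(temporal_feat)            # (B, C, H, W), values in (0, 1)
        return spatial_feat * attn + spatial_feat  # motion-gated residual combination

mga = MotionGuidedAttention()
fused = mga(torch.randn(2, 64, 64, 512), torch.randn(2, 64, 64, 512))
print(fused.shape)  # torch.Size([2, 64, 64, 512])
```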
In this paper, we propose a framework named OCSampler to explore a compact yet effective video representation with one short clip for efficient video recognition. Recent works prefer to formulate frame sampling as a sequential decision task, selecting frames one by one according to their importance, whereas we present a new paradigm of learning instance-specific video condensation policies that select the informative frames representing the entire video in a single step. Our basic motivation is that efficient video recognition lies in processing the whole sequence at once rather than picking up frames sequentially. Accordingly, these policies are derived in one step from a light-weight skim network together with a simple yet effective policy network. Moreover, we extend the proposed method with a frame-number budget, enabling the framework to produce correct predictions with high confidence using as few frames as possible. Experiments on four benchmarks, i.e., ActivityNet, Mini-Kinetics, FCVID, and Mini-Sports1M, demonstrate the effectiveness of our method over previous approaches in terms of accuracy, theoretical computational expense, and actual inference speed. We also evaluate its generalization power across different classifiers, sampled frames, and search spaces. In particular, we achieve 76.9% mAP and 21.7 GFLOPs on ActivityNet with an impressive throughput of 123.9 videos/s on a single Titan Xp GPU.
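To illustrate the single-step selection paradigm (not OCSampler's actual policy network), the sketch below scores all candidate frames of a video at once with a tiny linear head and keeps the top-k frames in temporal order; the class name, feature dimensions, and budget value are hypothetical.

```python
# Illustrative only: one-step frame selection from lightweight per-frame features.
import torch
import torch.nn as nn

class OneStepFrameSampler(nn.Module):
    def __init__(self, feat_dim=256, k=6):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(feat_dim, 1)  # per-frame importance score

    def forward(self, frame_feats):
        # frame_feats: (B, T, D) features from a lightweight skim network
        scores = self.scorer(frame_feats).squeeze(-1)        # (B, T)
        topk = torch.topk(scores, self.k, dim=1).indices     # (B, k)
        idx = topk.sort(dim=1).values                        # keep temporal order
        return torch.gather(frame_feats, 1,
                            idx.unsqueeze(-1).expand(-1, -1, frame_feats.size(-1)))

sampler = OneStepFrameSampler()
selected = sampler(torch.randn(4, 64, 256))  # 4 videos, 64 candidate frames each
print(selected.shape)  # torch.Size([4, 6, 256])
```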
To date, live-cell imaging at the nanometer scale remains challenging. Even though super-resolution microscopy methods have enabled the visualization of subcellular structures below the optical resolution limit, the spatial resolution is still far from sufficient for structural reconstruction of biomolecules in vivo (e.g., microtubule fibers of about 24 nm thickness). In this study, we propose an A-Net network and show that the resolution of cytoskeleton images captured by a confocal microscope can be significantly improved by combining the A-Net deep learning network with a DWDC algorithm based on a degradation model. Using the DWDC algorithm to construct new datasets and exploiting the characteristics of the A-Net neural network (i.e., its relatively small number of layers), we successfully removed the noise and flocculent structures that originally interfered with the cellular structures in the raw images, and improved the spatial resolution by 10-fold using a relatively small dataset. We therefore conclude that the proposed algorithm, which combines the A-Net neural network with the DWDC method, is a suitable and universal approach for recovering the structural details of biomolecules, cells, and organs from low-resolution images.
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and mapping tasks. However, traditional RRN models are not robust, as they are disturbed by object changes, and RRN models that precisely account for object changes cannot robustly obtain the no-change set. This paper proposes an automatic, robust relative radiometric normalization method based on latent change noise modeling. It exploits the prior knowledge that no-change points exhibit only small-scale noise under relative radiometric normalization, whereas change points exhibit large-scale radiometric noise after normalization, and combines this with the stochastic expectation-maximization method to quickly and robustly extract the no-change set used to learn the relative radiometric normalization mapping function. This grounds our model theoretically in probability theory and mathematical deduction. Specifically, when histogram matching is selected as the relative radiometric normalization learning scheme, coupled with mixture-of-Gaussian noise modeling (HM-RRN-MoG), the HM-RRN-MoG model achieves the best performance. Our model is robust against clouds, fog, and changes. Our method also naturally yields a robust evaluation indicator for RRN, namely the root-mean-square error on the no-change set. We apply the HM-RRN-MoG model to subsequent vegetation/water change detection tasks, where it reduces the radiometric contrast and the NDVI/NDWI differences on the no-change set and produces consistent and comparable results. We further use the no-change set in a building change detection task, effectively reducing pseudo-changes and improving precision.
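As a minimal, hedged illustration of two ingredients named above, the sketch below shows histogram matching used as an RRN mapping and the root-mean-square error on a (given) no-change set as an evaluation indicator; the mixture-of-Gaussians/EM extraction of the no-change set itself is not reproduced, and the function names and placeholder data are assumptions.

```python
# Illustrative only: histogram-matching RRN plus the no-change-set RMSE indicator.
import numpy as np
from skimage.exposure import match_histograms

def rrn_histogram_matching(subject, reference):
    """Map the subject image so its intensity distribution matches the reference."""
    return match_histograms(subject, reference)

def no_change_rmse(normalized, reference, no_change_mask):
    """Root-mean-square error over pixels of an (assumed given) no-change set."""
    diff = normalized[no_change_mask].astype(np.float64) - reference[no_change_mask].astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

reference = np.random.rand(256, 256)                  # placeholder reference image
subject = 0.8 * np.random.rand(256, 256) + 0.1        # placeholder subject image
normalized = rrn_histogram_matching(subject, reference)
mask = np.ones_like(reference, dtype=bool)            # placeholder: all pixels "no change"
print(no_change_rmse(normalized, reference, mask))
```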
Contrastive learning (CL) has become the dominant technique for unsupervised representation learning, which embeds augmented versions of the anchor close to each other (positive samples) and pushes the embeddings of other samples (negatives) apart. As recent studies have revealed, CL can benefit from hard negatives (negatives most similar to the anchor). However, when we adopt existing hard negative mining techniques from other domains in graph contrastive learning (GCL), we observe only limited benefits. We perform both experimental and theoretical analysis of this phenomenon and find that it can be attributed to the message passing of graph neural networks (GNNs). Unlike CL in other domains, most hard negatives are potentially false negatives (negatives that share the same class as the anchor) if they are selected merely according to the similarity between the anchor and themselves, which undesirably pushes apart samples of the same class. To remedy this deficiency, we propose an efficient method termed ProGCL to estimate the probability of a negative being a true one, which, together with similarity, constitutes a more suitable measure of negatives' hardness. Additionally, we devise two schemes (i.e., ProGCL-weight and ProGCL-mix) to boost the performance of GCL. Extensive experiments demonstrate that ProGCL brings notable and consistent improvements over base GCL methods and yields multiple state-of-the-art results on several unsupervised benchmarks, even exceeding the performance of supervised ones. Moreover, ProGCL can be readily plugged into various negatives-based GCL methods to improve their performance. We release the code at https://github.com/junxia97/ProGCL.
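The hedged sketch below shows one way a per-negative "probability of being a true negative" could re-weight the InfoNCE denominator, in the spirit of a weighting scheme such as ProGCL-weight; the weights are simply passed in here, and the estimation procedure used by the paper (a mixture model over similarities) is not reproduced.

```python
# Illustrative only: InfoNCE-style loss with per-negative weights in [0, 1].
import torch
import torch.nn.functional as F

def weighted_info_nce(anchor, positive, negatives, neg_weights, tau=0.5):
    # anchor, positive: (B, D); negatives: (B, K, D); neg_weights: (B, K)
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = torch.exp((anchor * positive).sum(-1) / tau)                     # (B,)
    neg_sim = torch.exp(torch.einsum("bd,bkd->bk", anchor, negatives) / tau)   # (B, K)
    denom = pos_sim + (neg_weights * neg_sim).sum(-1)
    return -torch.log(pos_sim / denom).mean()

loss = weighted_info_nce(torch.randn(8, 128), torch.randn(8, 128),
                         torch.randn(8, 16, 128), torch.rand(8, 16))
print(loss.item())
```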
Video recognition in an open and dynamic world is quite challenging, as we need to handle different settings such as close-set, long-tail, few-shot, and open-set. By leveraging semantic knowledge from noisy text descriptions crawled from the Internet, we focus on the general video recognition (GVR) problem of solving different recognition tasks within a unified framework. The core contribution of this paper is twofold. First, we build a comprehensive video recognition benchmark, Kinetics-GVR, including four sub-task datasets to cover the settings mentioned above. To facilitate research on GVR, we propose to utilize external textual knowledge from the Internet and provide multi-source text descriptions for all action classes. Second, inspired by the flexibility of language representation, we present a unified visual-linguistic framework (VLG) that solves the GVR problem with an effective two-stage training paradigm. Our VLG is first pre-trained on video and language datasets to learn a shared feature space, and then devises a flexible bi-modal attention head to collaborate on high-level semantic concepts under different settings. Extensive results show that our VLG obtains state-of-the-art performance under all four settings. The superior performance demonstrates the effectiveness and generalization ability of our proposed framework. We hope our work takes a step towards general video recognition and can serve as a baseline for future research. The code and models will be available at https://github.com/MCG-NJU/VLG.
In massive multiple-input multiple-output (MIMO) systems, the user equipment (UE) needs to feed the channel state information (CSI) back to the base station (BS) for subsequent beamforming. However, the large number of antennas in massive MIMO systems causes huge feedback overhead. Deep learning (DL) based methods can compress the CSI at the UE and recover it at the BS, which reduces the feedback cost significantly. However, the compressed CSI must be quantized into bit streams for transmission. In this paper, we propose an adaptor-assisted quantization strategy for bit-level DL-based CSI feedback. First, we design a network-aided adaptor and an advanced training scheme to adaptively improve the quantization and reconstruction accuracy. Moreover, for ease of practical deployment, we introduce expert knowledge of the data distribution and propose a pluggable and cost-free adaptor scheme. Experiments show that, compared with the state-of-the-art feedback quantization method, this adaptor-aided quantization strategy achieves better quantization accuracy and reconstruction performance at little or no additional cost. The open-source code is available at https://github.com/zhangxd18/QCRNet.
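Independently of the adaptor proposed in the paper, the baseline step it builds on can be sketched as follows: the encoder's compressed CSI vector is uniformly quantized to a few bits per element for feedback and dequantized at the BS. The value range, bit width, and codeword size below are assumptions for illustration.

```python
# Illustrative only: uniform quantization of a compressed CSI codeword to bit-level indices.
import numpy as np

def quantize(codeword, num_bits=4):
    """Uniformly quantize values assumed to lie in [0, 1] to 2**num_bits levels."""
    levels = 2 ** num_bits
    idx = np.clip(np.round(codeword * (levels - 1)), 0, levels - 1).astype(np.int64)
    return idx  # integer indices, i.e. the bit stream fed back to the BS

def dequantize(idx, num_bits=4):
    levels = 2 ** num_bits
    return idx.astype(np.float64) / (levels - 1)

codeword = np.random.rand(32)                # compressed CSI from the UE-side encoder
recovered = dequantize(quantize(codeword))   # what the BS-side decoder actually sees
print(np.max(np.abs(codeword - recovered)))  # quantization error, at most 0.5 / (levels - 1)
```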
Real and fake news in various domains (e.g., politics, health, and entertainment) spreads through online social media every day, necessitating fake news detection for multiple domains. Among them, fake news in specific domains such as politics and health has more serious potential negative impacts on the real world (e.g., the epidemic of misinformation around COVID-19). Previous studies focus on multi-domain fake news detection, mining and modeling the correlations between domains equally. However, these multi-domain methods suffer from a seesaw problem: the performance of some domains often improves at the cost of hurting the performance of others, which can lead to unsatisfactory performance in specific domains. To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which can improve the performance on a specific target domain. To transfer coarse-grained domain-level knowledge, we train a general model on the data of all domains from a meta-learning perspective. To transfer fine-grained instance-level knowledge and adapt the general model to the target domain, we train a language model on the target domain to evaluate the transferability of each data instance in the source domains and re-weight each instance's contribution. Offline experiments on two datasets demonstrate the effectiveness of DITFEND. Online experiments show that DITFEND brings additional improvements over the base models in a real-world scenario.
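As a hedged sketch of instance-level re-weighting in general (not DITFEND's implementation), the snippet below scales each source-domain example's loss by a transferability score, which the paper obtains from a language model trained on the target domain; here the scores are simply given, and all names and shapes are assumptions.

```python
# Illustrative only: re-weighting per-example losses by given transferability scores.
import torch
import torch.nn.functional as F

def reweighted_detection_loss(logits, labels, transfer_scores):
    # logits: (N, 2) fake/real predictions; transfer_scores: (N,), higher = more target-like
    per_example = F.cross_entropy(logits, labels, reduction="none")          # (N,)
    weights = torch.softmax(transfer_scores, dim=0) * len(transfer_scores)   # mean-1 weights
    return (weights * per_example).mean()

loss = reweighted_detection_loss(torch.randn(32, 2),
                                 torch.randint(0, 2, (32,)),
                                 torch.randn(32))
print(loss.item())
```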